Received: by cheltenham.cs.arizona.edu; Mon, 5 Oct 1992 14:14:16 MST
Message-Id: <199210051618.AA00842@optima.cs.arizona.edu>
From: keil@apollo.hp.com
Date: Mon, 5 Oct 1992 12:16:43 -0400
In-Reply-To: "Richard L. Goerwitz" <goer@midway.uchicago.edu>
"storing objects" (Oct 3, 9:13pm)
X-Mailer: Mail User's Shell (7.2.4 2/2/92)
To: "Richard L. Goerwitz" <goer@midway.uchicago.edu>,
icon-group@cs.arizona.edu
Subject: Re: storing objects
Status: R
Errors-To: icon-group-errors@cs.arizona.edu
>Let's say that I have a large table that I want to store and recreate
>at run-time in a program. What is the best way of doing this? What
>I mean by "best" is primarily "fastest," and not "smallest."
>Any better ideas?
>-Richard
What is large? 1k, 10k, 100k elements?
With the old scheme of non-expandable tables, I found that increasing
the initial table size had a big effect for tables of over a thousand
elements. I'm not sure where the breakover point is for the newer
expandable table mechanism, but I suspect that if you make the initial
table size at least as big as, if not twice the size of, your table,
things will go faster. At one point, back in the version sevens,
I added a second parameter to table [table(x,size)] to set the initial
size of the tables needed. This seemed more elegant than just always
using a larger table. On the other hand, with virtual memory systems,
you may not care if the initial table size is always 10k elements (if
you decide to just statically increase the initial allocation size).
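
To make that concrete, here is a rough sketch in Icon. The two-argument
table(x,size) call is the version-seven modification mentioned above,
not stock Icon, where table takes only the default value:

    procedure main()
        # Stock Icon: table(0) makes an empty table whose default
        # value is 0. It starts small and expands (rehashing) as
        # elements are inserted.
        t := table(0)

        # Presized form [table(x,size)]: allocate room for 10000
        # elements up front, skipping the intermediate expansions.
        # (Hypothetical; only in the modified version seven runtime.)
        # t := table(0, 10000)

        every i := 1 to 10000 do    # insert 10000 elements
            t[i] := i * i

        write(*t)                   # *t is the element count
    end
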
-Mark